27 December, 2008

Making the Case for Materialism

From time to time I get sucked into debating with religious types on issues such as free will, ethics and materialism. The same arguments go round in circles and of course no one changes their mind about anything. But I thought it would be nice to have some mutual respect. That's a first step to someone changing their mind. To this end I wrote this short piece in response to a comment by a nice chap with the nickname of gpuccio.

If anyone, including gpuccio, wants to make any comments this seems like a good place.

I am not very interested in lengthy reruns of old arguments which have been done much better in philosophy journals. It is more a case of trying to convey the appeal and strength of materialism to me personally.

48 Comments:

Blogger oleg said...

I have a question for gpuccio. How would you check empirically whether a computer “freely generates consciousness and CSI”? How do you even measure (not calculate) CSI?

You have an open invitation to also reply at AtBC.

12:00 am  
Blogger gpuccio said...

Mark,

I have read your piece and I find it very sincere and fair. I appreciate that you express your personal reasons for being a materialist. As you know, I fully respect all personal choices regarding general views of reality, and my purpose is not to convince anybody.

All the same, I feel that I owe you at least some comments. I will split them into separate posts, so that each is not too long.

First of all, I obviously agree that "Brain activity and mental activity are indisputably closely synchronised." That has always been obvious, and my only note is that modern neurophysiology has added nothing important "in principle" to what we have always known. But my personal explanation is not that "they cause each other", nor that "they are different ways of looking at the same thing". It is more in the line of: consciousness expresses itself through the mind and the brain, and receives its inputs through them. I prefer the analogy of a game player playing a sophisticated videogame, and totally absorbed in it, to the point of identifying his reality with the reality of the videogame. And please notice that for me consciousness (the transcendental I) is not the same thing as the mind (a formal instrument between consciousness and the body and material world).

9:42 am  
Blogger gpuccio said...

I will not comment here on the points about the animal kingdom and the success of science in replacing some explanations with others. In general, your points here are reasonable. In part I agree, in part I should give answers based mainly on my personal faith, and that has never been my purpose. But none of those points would bring me to a materialist view of reality.

9:47 am  
Blogger gpuccio said...

The following points are instead the most important and debatable.

I agree with you that the biggest problem of all, for a materialist, is what you call "the difference between the internal and external view of a mental event or state", and others call "the hard problem of consciousness": in other words, why does subjective experience exist? I had focused my discussion on free will because that was the initial subject, but there is no doubt that the nature of consciousness is "the real thing".

In a sense, I cannot even agree with your formulation of the problem: it is not "the difference between the internal and external view", because both an internal and an external view are still "views", that is internal representations in consciousness. That's what I call "the empirical priority of consciousness". All that we know and experience happens in consciousness, so consciousness is our first and main observable. All the rest is indirect.

So the question is rather: what is the difference between my consciousness (my first observed reality) and those objects in the material world which appear not to be conscious?

Please notice that I have always made a distinction between our knowledge of our personal existence as conscious beings (a direct, intuitive perception), and our knowledge that other human conscious beings exist (an inference derived from two different sources: a) the observation of objective beings who are similar to us, and have a similar behaviour, and b) the essential fact of our personal certainty that we are conscious).

So, to sum up, we are certain of our personal consciousness because we perceive it directly, and we very reasonably infer a similar condition in other human beings. That's my premise, and that's also one of the reasons why I don't like to extend the discussion to animals: while we can certainly make inferences about animal consciousness too, they are much more indirect, because there are certainly differences between animals and us, especially in behaviour, and any theory about them becomes much more debatable.

10:03 am  
Blogger gpuccio said...

Regarding your example of our arm: I do believe that our body is an object for our consciousness. It is different from an "external" object because, as you correctly say, there is a special relationship between our consciousness and the body, which is different from the relationship between our consciousness and a table. Even the mind and its processes are objects for the consciousness, if we just observe them. The only transcendental component of consciousness is the subject, what I call "the transcendental I". The subject is the real mystery. We can debate whether the nature of the mind is purely physical or whether it involves principles which are not yet understood by science, but that is not the real problem. The real problem is the nature of the perceiver, the I, the simple center which experiences all, which suffers and enjoys, and which gives continuity and support to the concept of personal identity.

10:13 am  
Blogger gpuccio said...

I have not much to add about free will. We already know that your concept of free will is different from mine, and we agree that my concept is not compatible with a materialist view of consciousness.
You say: "Like most humans I make choices and decisions", but in your view those choices and decisions seem to be objectively determined, or at best to happen randomly. For me that's not freedom, and that's not even choice. If the physical evolution of the physical system which is your brain is all that "chooses", there is not much to say. You say: "These choices and decisions are made to satisfy my desires and whims", but I would say that, in your view, your desires and whims and your choices and decisions are all the same thing: states of a physical system. The mystery remains of why they seem to be perceived by an I, and why that I believes that he is choosing and deciding. So, everything brings us back to the "hard problem of consciousness".

11:01 am  
Blogger gpuccio said...

Finally, I am not much interested in the social definition of what is right or wrong. I agree with you that it's "a subjective or relative view of ethics", and I agree that it's very important, but that's not my main interest. I agree with you that outward moral rules must in some way be discussed between people believing different things.

But I do believe in, and am interested in, a personal sense of morality. I have raised the problem of loyalty, in your example about RMI, because I believe that loyalty is one of the deepest values in personal morality. So, I ask you again: in your example about RMI, do you believe that your decision to betray your best friend in order to get a personal advantage, or alternatively to remain loyal to him, is completely determined, and could in principle be anticipated by your super-RMI? Or that it is a random event? In other words, that it has nothing to do with you, except in the sense of all the objective constraints which are already implicit in your state?

Do you believe that such a choice is made only to "satisfy your desires and whims"? In a sense, I could agree that even a choice of self-sacrifice is made to satisfy a desire: in this case, the desire to be honest and loyal, and to keep some self-respect, for instance. But do you agree that the "I" perceives those two opposite desires (that for gratification of one's immediate needs, and that inherent in deeper values of the self) as different, and differently important? And that such a difference is exactly what is at the base of personal morality, and always has been?

Again that has nothing to do with desires and whims in themselves. We always have contrasting desires within ourselves. It has rather to do with the "meaning" of desires, with their value and with the different perspectives, both of reality and of ourselves, which do support those contrasting desires.

That's where moral choice happens. That's where moral choice has sense.

11:17 am  
Blogger gpuccio said...

oleg:

I have answers for you.

"How would you check empirically whether a computer “freely generates consciousness and CSI”?"

For consciousness, I think we should consider as a first step something like a true Turing test. By "true", I mean that the behaviour of the computer must truly give me a reasonable conviction that I am in front of a conscious being. For instance, the computer must be capable of truly creative language.

As I said to Mark, our inference of consciousness in other human beings is really based on two things:

a) Our personal perception of consciousness in ourselves.

b) The strong similarities in both physical structure and behaviour between us and other human beings.

In the case of a computer, a) remains unchanged, but we can split b) in two:

b1) Physical similarities: these would not be satisfied by a computer, unless we build a perfect android (have you ever read Dick?)

b2) Behavioural similarities: these would be satisfied by a truly positive Turing test.

So the problem could be that, once b2) is satisfied, it could be debated whether we should empirically accept the presence of consciousness in the absence of b1).

But I would like to postpone that problem, and stick for the moment to the main one. In other words: do you have any examples of a truly positive Turing test?

11:27 am  
Blogger gpuccio said...

oleg:

For CSI, it's easier. CSI is objectively observed. So, it would be enough to just observe it in the output, and be sure that it is not just a reshuffling of the CSI in the program. I am aware that our understanding of CSI has to become deeper to do that reliably, and I believe that Dembski and Marks are actively working toward that with their papers on active information.

But, just to make a simple example, I have already mentioned the importance of language. This post, like yours, is an example of CSI in the form of written language with creative meaning. It is in many ways unique, not only for the special sequence of letters, but also for the special meanings expressed. Please, show me a computer which can output something like that.

In other words, show me a computer which can really talk with me, without having the words and sentences and concepts already written in the software.

11:33 am  
Blogger gpuccio said...

oleg:

"How do you even measure (not calculate) CSI?"

That's easy too. I have already debated that on UD. The simplest way is to express it as the complexity of a specified unit of information.

The concept of CSI has 3 components: complexity, specification, non-necessity.

So, in brief:

1) You define the unit of information where you are going to measure the CSI. Let's say it is a protein.

2) You define the specification. As in biology we are interested in functional specification, you define the function and the context for it. In the case of a protein, that could be a measurable level of some specific biochemical activity in the context of a living cell.

3) You verify that the function is present in the unit, and that there is no known mechanism of necessity which could output the information present in the unit. For a protein, both steps are easy enough.

4) Now, you measure the complexity, in terms of the ratio between the size of the target space (the number of possible proteins which exhibit the defined function at the defined level) and the whole search space. That will require approximations, and the search for upper or lower limits, but it can be done, and it will become increasingly easier as our understanding of proteins improves.

The complexity so calculated is a measure of the CSI in that unit of information.
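The step-4 arithmetic can be sketched in a few lines of Python. The target- and search-space sizes below are hypothetical placeholders chosen only to show the calculation, not measured values for any real protein:

```python
import math

def csi_bits(target_space_size, search_space_size):
    """Complexity in bits: -log2 of the fraction of the search space
    occupied by the target (functional) space."""
    return -math.log2(target_space_size / search_space_size)

# Hypothetical numbers: a 100-residue protein has a search space of
# 20^100 sequences; suppose (purely for illustration) that 10^20 of
# them perform the defined function at the defined level.
search_space = 20 ** 100
target_space = 10 ** 20

print(f"{csi_bits(target_space, search_space):.1f} bits")  # ≈ 365.8 bits
```

The answer is only as good as the assumed target-space size, which is exactly the point debated below.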

11:43 am  
Blogger Mark Frank said...

I would say that, in your view, both your desires and whims and choices and decisions are the same thing: states of a physical system. The mystery remains of why they seem to be perceived by an I, and why that I believes that he is choosing and deciding.

This is an important example.

I don't perceive my decisions and choices. I don't believe (or not believe) that I am deciding or choosing. I just make decisions.

I might mentally rehearse the options, say to myself "yes that's a good move - I will do it this way" - but this activity isn't "the choice" (maybe I am still trying to convince myself). Or I might decide to go to the net in tennis without hesitation or internal debate. It is still a voluntary conscious choice.

I believe you are in the grip of a picture of language where every word has an object which it names (see the early part of Wittgenstein's Philosophical Investigations). In your case you are looking for the thing which is labelled by the word "decision" or "choice" (this is an example of what he meant by intelligence being bewitched by language).

11:49 am  
Blogger oleg said...

gpuccio,

the problem with your proposals is that they rely explicitly on human judgement instead of measurement.

For instance, the Wikipedia article on the Turing test specifically mentions this as a weakness: The Turing Test is explicitly anthropomorphic, testing only whether or not the computer resembles a human being, not if it is generally "intelligent" or "sentient".

On CSI you write: For CSI, it's easier. CSI is objectively observed. and then proceed to describe how it can be calculated. But read my question. I asked you specifically how to measure it experimentally, not calculate it theoretically.

Both of your procedures are subjective as they rely on a human judge. That is a big problem.

1:32 pm  
Blogger Mark Frank said...

Gpuccio

So, I ask you again: in your example about RMI, do you believe that your decision to betray your best friend in order to get a personal advantage, or in alternative to remain loyal to him, is completely determined, and could be in principle anticipated by your super-RMI? Or that it is a random event? In other words, that it has nothing to do with you, if not in the sense of all the objective constraints which are already implicit in you state?


Yes I believe my moral decisions are either determined or random. But that doesn't mean they have nothing to do with me. It is my desires (including my conscience) which determine the outcome!

You believe in a person in addition to the physical body - this mysterious "I" beyond the one which has motives and does things. Remember that I don't.

2:56 pm  
Blogger gpuccio said...

oleg:

"Both of your procedures are subjective as they rely on a human judge. That is a big problem."

I don't think so. All measurement procedures rely on human judges. The important thing is that there are rules about how to judge. You always have to set a context, to define a unit of measure, to decide the level of accuracy of the measurement, to agree about what is being measured in a particular experimental context, and so on. CSI is no exception.

You say: "But read my question. I asked you specifically how to measure it experimentally, not calculate it theoretically."

But that's not true. I did exactly what you asked: how to measure it experimentally. The fact that some calculations are involved in the measurement does not make it any less experimental.

Let's review it:

1) You define the unit of information where you are measuring the CSI, in my example a protein. What's the problem with that? You have to know where you are measuring something. You have to know if you are measuring the length of a table, or of a bed. We are measuring CSI in a protein.

2) You define the specification: that protein has a function, and you define both the context (a living cell) and a minimum measurable level of that function. And you verify that your protein has that function, at at least that level, in that context. Everything objective and scientific. You are saying: I will measure CSI in this protein because I have verified that it is specified: it has an objective function in an objective context, and I have measured it.

3) You have to be sure that no known mechanism of necessity is able to generate that information (the protein sequence). That is a very important methodological step, and a very objective one. It also gives any "adversary" an easy chance to falsify the whole process of CSI evaluation.

4) Now you measure the complexity. I am proposing here a completely experimental measurement, which employs calculations. Where's the problem with that? You have to calculate the search space of that specific protein (the number of possible sequences of that length is a reasonable lower limit). That calculation is very real, and there is nothing theoretical about it: those are really possible sequences, and we calculate their number. And we have to calculate the size of the target space. That is certainly more difficult, but in no way theoretical. For instance, there is a very simple way to do it experimentally, although it would take far too long ever to be done: you could in principle generate all possible sequences of that length and just measure the function according to the defined level. In the end you would have a very empirical measurement.

As we cannot do that, and probably never will, we do what science always does: we make reasonable assumptions and approximations, according to reasonable models. And we get a result. That is possible, real, empirical and "improvable". As I have said, as our understanding of proteins improves, it will be ever easier to measure the size of the target space.

And finally, you divide that size by the size of the search space. And you have a number: a measurement. Everything very clear and objective. Empirical. And possible.
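Generating every possible sequence is intractable in full, but its sampling approximation can be sketched as below. The `is_functional` test is a toy stand-in for a real biochemical assay (I made it up so that the true target fraction is exactly 1/400 and the estimate can be checked against the analytic answer):

```python
import math
import random

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def is_functional(seq):
    """Toy stand-in for a real functional assay: 'functional' here just
    means the sequence begins with 'MA', so the true target fraction
    is (1/20)^2 = 1/400 and the true complexity is log2(400) bits."""
    return seq.startswith("MA")

def estimate_complexity_bits(n_samples=100_000, length=8, seed=0):
    """Estimate -log2(target fraction) by random sampling: a practical
    stand-in for exhaustively generating every sequence."""
    rng = random.Random(seed)
    hits = sum(
        is_functional("".join(rng.choice(AMINO_ACIDS) for _ in range(length)))
        for _ in range(n_samples)
    )
    return -math.log2(hits / n_samples)

print(f"{estimate_complexity_bits():.2f} bits")  # analytic value: log2(400) ≈ 8.64
```

Sampling works here only because the toy target fraction is large; for a real protein with a tiny target fraction, random sampling would almost never hit the target, which is why models and approximations are needed.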

Finally, just a note about the Turing test: I had always thought that it was the basis of all AI theory. So I ask you: without a Turing test, how could you ever suspect that a machine is conscious? Would that just be a matter of faith? Or are you suggesting that all of strong AI is not a scientific theory? In that case, you are more extreme than I am. After all, I only believe that it is a very bad scientific theory.

9:37 pm  
Blogger gpuccio said...

Mark:

"It is my desires (including my conscience) which determine the outcome!"

Again I don't understand: your desires and your conscience are determined, I suppose. They determine nothing, except in the sense that they are intermediate states in a causal sequence.

But in the end you are right when you say: "You believe in a person in addition to the physical body - this mysterious "I" beyond the one which has motives and does things. Remeber that I don't."

I am going to respect that. And in a way I will stick to your principle: "if I get bored it doesn't mean I concede". And believe me, I am not using "bored" (your term) in any negative sense. It has all been great fun. But I believe we are reaching a point where we are just repeating the same things, which is something I hate...

9:44 pm  
Blogger oleg said...

gpuccio,

Your steps contain no measurements.

Step 1 simply declares We are measuring CSI in a protein.

In Step 2 someone decides subjectively whether the protein is "functional".

Step 3 is part of the EF, not CSI. At any rate, making sure that no known mechanism of necessity is able to generate that information (the protein sequence) is not an objective measurement. The outcome depends on our state of knowledge: if tomorrow we find such a mechanism the answer will change. That's why Dembski retired the EF and switched to CSI.

In Step 4 you acknowledge that we have no idea about the size of the target space and suggest to "make reasonable assumptions and approximations, according to reasonable models". The problem is that reasonable models are not necessarily valid. Aristotle's physics was entirely reasonable, but careful experimental tests showed that he was wrong. That's why a model is not a substitute for an experiment.

So it looks like CSI is not a measurable quantity (as entropy is) but a mathematical abstraction. Not surprising: it was invented by a mathematician.

As to strong AI, it's not science but rather an amalgam of philosophy and engineering.

11:05 pm  
Anonymous Anonymous said...

Mark:

For the record, & with all due respect to those who hold that opinion, there is reason to consider another side to the claim that materialism is a coherent worldview.

Summed up (with kind help of SB) at 195 in the UD thread:

____________

The purpose of reason is to lead us towards the truth, reliably detecting and correcting error along the way. However, according to materialism all things in the end reduce to only matter-energy and space-time; interacting per physical forces and chance circumstances. Therefore, reason, even if it exists (a questionable proposition for materialists), has no capacity to surmount the physical chains of cause-effect that produce and control it. These chains act through evolutionarily produced genetic and socio-cultural conditioning and constraints, which manifest themselves in what is nothing more than “the behavior of a vast assembly of nerve cells and their associated molecules” in our central nervous systems. As a direct result, materialistic reasoning is self-referential and inconsistent with itself; as can be shown through many illustrative cases: e.g. Marx, Freud, Skinner, Dawkins, Lewontin and Crick.

________________

This is of course not intended to stand on its own (just as is true of any executive summary). Accordingly, onlookers may cf here at 156 in the thread, and again my basic summary of the matter here.


G'day

GEM of TKI

12:35 pm  
Anonymous Anonymous said...

Oleg:

A footnote, from an old Physics teacher.

First, measurement is inherently comparison to a standard, expressed in differing forms: ratios, interval scales, ordinal scales, nominal [state-based] ones. Metrics are therefore dependent on intelligent judgement, and measurement on acts of such judgement.

CSI is based ab initio on a vector, expressible in part in the EF as an operational process based on a relevant aspect of an object/situation:

[1] necessity -- shown in little or no contingency [Y/N comparison],

[2] complexity per a reasonable bound [> 500 - 1,000 bits of info-storing capacity],

[3] independent [esp. -- cf Yockey-Wickens circa 1980's -- functional] specification that puts it into a tight target in the config space [cf. Fisherian inference testing].

A vector can be constructed: [1/0, 1/0, no. of bits of storage capacity]. Such is a valid, objective metric. (E.g. ASCII text beyond 143 characters in English passes. Know any cases of such CSI that fail to be designed?)

WmAD in 2005 constructed a model [p. 24] whose LHS compresses into an inequality:

χ = –log2[10^120·ϕS(T)·P(T|H)] > 1

If LHS > 1, then CSI is inferred.
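As a sketch, the inequality can be evaluated numerically for hypothetical inputs; the values of ϕS(T) and P(T|H) below are made up for illustration, not taken from Dembski's paper:

```python
import math

def chi(phi_s_t, p_t_given_h):
    """Dembski's 2005 measure: chi = -log2(10^120 * phi_S(T) * P(T|H));
    chi > 1 is the threshold for inferring CSI."""
    return -math.log2(1e120 * phi_s_t * p_t_given_h)

# Hypothetical inputs: phi_S(T) = 10^5 specifications at least as simple,
# P(T|H) = 2^-500 under the chance hypothesis H.
x = chi(1e5, 2.0 ** -500)
print(f"chi = {x:.1f}; CSI inferred: {x > 1}")
```

Note that everything hinges on the inputs: the 10^120 factor is fixed, but ϕS(T) and P(T|H) must be supplied by the analyst.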

G'day

GEM of TKI

1:01 pm  
Blogger oleg said...

Kairosfocus,

I'm not sure what you are trying to achieve by mentioning your credentials. Should I reciprocate?

I am familiar with Dembski's paper you cite. I discussed it with CJYman, Zachriel and others at Telic Thoughts. I explained there the problem with his approach: the number φ reflects a subjective factor—the knowledge of an observer—rather than objective information. Here is my response to CJYman:

"I never claimed an ability to measure CSI in a binary string, you did. You showed a trivial example on your blog (a string containing all ones). But that's about it. If I give you any random-looking pattern you will have no idea what to do with it, as my example demonstrated. You don't know how to calculate φ, the number of allowed strings. Without my telling you how I designed the number, you had 2^66 possibilities. If, on the other hand, I told you that it was a famous mathematical constant, that number would shrink to something like 10. The answer depends on you knowing something about the design process.

The situation in biology is quite similar. If you want to evaluate the CSI of a protein molecule, you need to know the space of available configurations. If you model the protein as balls on sticks, you'll get an astronomically large number of states. If you add some knowledge about hydrogen bonding, the number of configurations shrinks dramatically, though it is still very large. What does that tell us? The CSI measure depends on the state of our knowledge about the system. Tomorrow we might discover some other regularity in protein folding and the space of available configurations will shrink again. Thus a high CSI may simply be an indicator of our ignorance and not of design."

4:20 pm  
Anonymous Anonymous said...

Oleg

I decided to pass back for a moment.

I note that you have not really addressed the fact that -- per your demand -- I have outlined a relevant, simple metric for CSI. I have also cited one that you now admit to having known all along.

So, why did you ask for such if you knew all along that such metrics exist? (I hope it was not to try to score debate points? But otherwise, that seems a rather time and effort wasting exercise.)

Next, you now appeal to a possible future adjustment to the science, one that could possibly alter our view of the config spaces for proteins -- note: not fact, but hope/faith. Also, we already know that 3-5 bonds are probably only 50% of unconstrained inter-amino-acid bonds, and that AAs will probably tend to undergo breakdown or bond with non-AA molecules in so-called prebiotic soups. That's part of the OOL conundrum for materialists.

How is your hoped-for possibility different from the general finding that sciences are about provisional knowledge? (Isn't that what Popper's falsifiability point is about, not to mention Lakatos's progressive/degenerative programmes?)

And, should we find out that H-bonding dynamics on protein chains embed information that constrains such chains to be life-facilitating, that would tell us that the laws of chemistry have life written into them.

What would be the most reasonable explanation for such laws? [Especially since we already know that the universe in which we live is balanced on a knife's edge relative to radically non-life facilitating possible universes, and that the relevant contingency space is probably HUGE.]

In short, after going round and round the merry-go-round, we are right back at the same basic issues and implications.

GEM of TKI

9:45 pm  
Blogger Mark Frank said...

KF
Thanks for commenting.

For the record, & with all due respect those who hold that opinion, there is reason to consider another side to the claim that materialism is a coherent worldview.

You seem to be talking about the argument from reason. This started with CS Lewis (excellent novelist but lousy philosopher) who was famously refuted by G.E. Anscombe (a Catholic, but too intellectually honest to let a fallacious argument go by). Since then there have been various attempts to make it work and every attempt I have read has failed.

I am very busy right now - New Year's party at our house. But if you care to link to a paper which puts forward the argument from reason that you find convincing I will gladly respond in the New Year on any blog or platform you wish.

Happy NY

Mark

10:20 pm  
Blogger oleg said...

The difference, kairosfocus, is that physical variables are objective: length, mass, entropy, and strength of electric field do not depend on the state of knowledge of the observer. You can measure them with a stick, a scale, a calorimeter or a voltmeter.

I'd like to know what kind of device or experimental setup you are proposing to use in order to measure—not calculate—Dembski's CSI. I'm sure that, as a former physics teacher, you understand the difference between theory and experiment.

12:15 am  
Anonymous Anonymous said...

Mark

I took time to present a short precis here, with links to where my substantial argument is. (In so doing, I noted that it cannot stand on its own in a debate-tinged environment, as is the general characteristic of Executive Summaries. I gave links as a compromise with your short remarks only policy.)

You have not addressed the substantial case but instead chose to make a fallaciously dismissive appeal to the disreputableness of the claimed source of an argument without addressing the merits; never mind that it is Reppert, not Lewis, whom I explicitly cited; and that, on a sub-point of my cumulative case in the main.

FYI, C S Lewis -- who is a far better philosopher than many are willing to admit -- did adjust the initial form of his argument on recognising its limitations, and the adjusted form is far more formidable than you are willing to admit. Much less, Reppert's much more elaborated and technical form.

Astute onlookers will realise that to date neither here nor at UD have you actually engaged the substantial issues on materialism's characteristic problem that it cannot on evolutionary materialistic grounds account for the origin of the mind and its reasoning functions in ways that do not decisively undermine the credibility of said mind. And, I have named six very relevant specific high-profile cases in point.

For why I say that yet again, onlookers may wish to go here.

Kindly start from where I do: the implications of the Welcome to Wales example adapted from Taylor, on the merits.

G'day

GEM of TKI

7:30 am  
Anonymous Anonymous said...

Oleg:

I observe your: >>physical variables are objective: length, mass, entropy, and strength of electric field do not depend on the state of knowledge of the observer.>>

1 --> FYI, I gave you a metric that is a pseudo-vector, with three values measured in bits. (If you wish, by taking the product of the bit values, you can come up with a CSI metric in one figure, which will be in bits: CSI if above the 500 - 1,000 bit threshold in the context of being highly contingent, specified [especially functionally so] and beyond the extended form of the UPB. Or, you can use Dembski's more complex form.)

2 --> FYI, bits are a common unit used in IT and comms theory. As a fundamentally logarithmic measure, they are based on both observations and calculations:

I = log [dj/pj], in bits [if the log is base 2, log2] . . . Eqn 1, my briefing note Section A

3 --> A base-2 log calculation relative to a priori and a posteriori values for probabilities of symbols [which depend on conventions and designed structures] in a comms system is obviously deeply dependent on the state of knowledge of the observer and his community of practitioners, as say Jaynes would point out.

4 --> Since info theory is a well accepted aspect of modern physical sciences, your objection is selectively hyperskeptical. (Indeed, it is possible to build a serious approach to statistical thermodynamics on an info theory basis, cf. Harry Robertson following Jaynes et al. In that approach, we can see that the metric for entropy is in fact deeply related to the state of knowledge of the observer. Not to mention the integral role of the (presumably knowing and certainly acting) observer in Relativity and Q-mech, on the predominant schools of thought over the past 80 years or so. (Without endorsement, cf. interesting reflections here.))

5 --> More broadly, EVERY measurement process depends on perceptions, correct manipulations, calculations and judgements of comparison, i.e. all measurements are relative to the knowledge of the measuring subject.
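The log-ratio measure in Eqn 1 can be illustrated with a short calculation. The symbol probabilities below are hypothetical, and a posterior probability of 1 (symbol received with certainty) reduces the formula to the familiar surprisal −log2(p):

```python
import math

def info_bits(p_prior, p_posterior=1.0):
    """Information carried when a symbol's probability moves from p_prior
    (before reception) to p_posterior (after reception), in bits:
    I = log2(p_posterior / p_prior). With p_posterior = 1 this is the
    familiar surprisal -log2(p_prior)."""
    return math.log2(p_posterior / p_prior)

# Hypothetical cases:
print(info_bits(1 / 8))    # equiprobable 8-symbol alphabet: 3.0 bits
print(info_bits(1 / 128))  # a 7-bit ASCII character, codes equiprobable: 7.0 bits
```

As the comment notes, the prior probabilities here encode what the observer and the community of practitioners already know about the symbol conventions.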

So, in the end, astute onlookers will see why this gambit comes across to me as a distractor, not a serious issue.

For the real issue, kindly start from the Welcome to Wales gedankenexperiment as already linked.

GEM of TKI

8:00 am  
Anonymous Anonymous said...

Jumping the hoop a third time (and as is discussed in my online note, Section A and Appendix 1):

I cite Robertson's Statistical Thermophysics:

>>. . . the standard assertion that molecular chaos exists is nothing more than a poorly disguised admission of ignorance, or lack of detailed information about the dynamic state of a system . . . . If I am able to perceive order, I may be able to use it to extract work from the system, but if I am unaware of internal correlations, I cannot use them for macroscopic dynamical purposes. On this basis, I shall distinguish heat from work, and thermal energy from other forms . . . >> [PP. vii - viii, PH, 1993.]

He discusses this in Ch 1, then cites Jaynes:

>>". . . The entropy of a thermodynamic system is a measure of the degree of ignorance of a person whose sole knowledge about its microstate consists of the values of the macroscopic quantities . . . which define its thermodynamic state. This is a perfectly 'objective' quantity . . . it is a function of [those variables] and does not depend on anybody's personality. There is no reason why it cannot be measured in the laboratory." . . . . [cf. pp. 3 - 6, 7, 36.] >>

I trust that clarifies.

GEM of TKI

8:14 am  
Anonymous Anonymous said...

PS: I forgot.

I discussed the spontaneous synthesis of proteins in prebiotic soups above, since in the living cell the relevant macromolecule is the DNA which codes for proteins.

It is well known that the chemistry of chaining has little correlation to the informational content of the chain of AGCT monomers. And for the excellent reason that if the chain were so constrained by blind forces, it could store little or no information, and would end up as essentially similar to an orderly but information-poor crystal.

This, Orgel et al were discussing from the early 1970's. (Cf App 3, my online note.)

8:21 am  
Anonymous Anonymous said...

PPS: To make clear that the bit is both on the theory and the technology side, IT stands for "information technology."

There is no shortage of (sadly, often quite expensive) apparatus used to measure in bits and for that matter based on bits. (In short, information has now joined matter-energy and space-time as a fundamental constituent of our world. And -- as my online note points out at its outset -- this development and its implications are at the heart of the issue here.)

Not to mention, many valid metrics do not use masses of technical apparatus: measurement is not to be confused with hardware. (Just cf. your CPU usage statistics graph from Control Panel, assuming you are afflicted with Bill Gates. Software measuring systems resting on general-purpose hardware are artifacts of mind at work.)

Third, in general, physical measurements embed more or less theory, starting from "seeing X is seeing X as a case of P . . ."

(A practical case in point being the need for a valid theory of optics before Galileo's telescopic observations of astronomical objects could be deemed as reliable. Similarly in our day, astronomical distance measurements beyond the reach of parallax are increasingly embedded with theoretical considerations and calculations as they progress outwards.)

So, the temptingly easy dichotomising of theoretical and practical work is fallacious.

8:47 am  
Blogger Mark Frank said...

KF

You wrote:

You have not addressed the substantial case but instead chose to make a fallaciously dismissive appeal

and

Astute onlookers will realise that to date neither here nor at UD have you actually engaged the substantial issues on materialism's characteristic problem that it cannot on evolutionary materialistic grounds account for the origin of the mind and its reasoning functions in ways that do not decisively undermine the credibility of said mind.

I would imagine most astute on-lookers have moved on by now. However, you are right. I have not attempted any case at all - not even a fallacious one - because I haven't got the time.

I will address this over the next day or two. You seem to want me to use your exposition of the argument from reason - so that is the one I will work with.

Mark

8:59 am  
Anonymous Anonymous said...

Mark

Thanks for taking time to reply. I am willing to wait for substantial details.

However, kindly note that I have explicitly stated and requested:

1 --> The AFR is a small component of a cumulative case. [Context: one strand of cotton or sisal fibre is short and weak. But, twist any together into threads, and counter-twist, and you get a long, strong rope. Just so, many empirically based inductive arguments on inference to best explanation gain their full cogency from cumulative effects of the interactions of their component sub-arguments. Why? In a nutshell, because the overall probability of error on evidence and inferences sharply reduces as the number of relatively independent but mutually supportive cases that would all have to be wrong multiplies.]

2 --> I have pointed out six named high-profile examples of the descent into self-referential incoherence that I have identified as a common challenge of evolutionary materialistic systems of thought. [Context: this is real and on-the-ground.]

3 --> I have asked you to start from a substantial addressing of the Welcome to Wales example that begins my main discussion, as I did at the UD threads. [Onlookers, context: when I brought it up here at point 3 (and cf. point 2, where I give a summary and bring up Plantinga and Reppert as secondary, supportive points), MF tried to turn the thought experiment into something else that he thought he could more easily dismiss. I pointed out that this both ducks the material issue and shows that his counter-example implicitly accepts the reliability of the explanatory filter -- which of course points to the precise sort of evident design in nature that would decisively undermine his materialism as a proposed "best explanation."]

So, please, start from Welcome to Wales. I will be happy to wait on you; though I find the artificiality of the length constraint here somewhat less than helpful.

G'day again

GEM of TKI

11:42 am  
Anonymous Anonymous said...

PS: I think it is significant to note how my remarks were excerpted above in MF's last comment:

As cited: >> You have not addressed the substantial case but instead chose to make a fallaciously dismissive appeal>> leading to the claim >>I have not attempted any case at all - not even a fallacious one >>

As written: >>You have not addressed the substantial case but instead chose to make a fallaciously dismissive appeal to the disreputableness of the claimed source of an argument without addressing the merits; never mind that it is Reppert, not Lewis, whom I explicitly cited; and that, on a sub-point of my cumulative case in the main.

FYI, C S Lewis -- who is a far better philosopher than many are willing to admit -- did adjust the initial form of his argument on recognising its limitations, and the adjusted form is far more formidable than you are willing to admit. Much less, Reppert's much more elaborated and technical form. >>

That, in the context of MF's earlier: >>You seem to be talking about the argument from reason. This started with CS Lewis (excellent novelist but lousy philosopher) who was famously refuted by G.E. Anscombe (a Catholic, but too intellectually honest to let a fallacious argument go by). Since then there have been various attempts to make it work and every attempt I have read has failed.>>

I think the context of the cited makes it very evident that MF's claim that he was not making an argument is unfortunately materially inaccurate to the substantial point. He has in fact made an argument, and it was fallacious: an attempted genetic fallacy based dismissal. [Sigh: Cf an unfortunately similar issue here at point 1, on the use of "vitriol."]

So, kindly pardon a point of correction.

12:03 pm  
Anonymous Anonymous said...

PPS: For those interested in AFR (which comes up in a secondary stage to the issue), I suggest preparatory reading here (interview with VR), here and here as a starter. The last in brief part addresses the tendency to dismiss CSL also.

12:28 pm  
Blogger oleg said...

Kairosfocus,

The dichotomy between theory and experiment is the crucial difference of modern science from its predecessor—philosophy. If theory is not verified by experiment, it isn't scientific.

However we define entropy mathematically, it can also be measured in a completely objective manner. You start with a system at a temperature T, attach a calorimeter to it, and warm it up to a higher temperature T+ΔT, measuring the amount of heat ΔQ required to do so. The entropy change of the system is ΔS = ΔQ/T. You can thus determine experimentally the change in the entropy of a system. Crystalline ice has residual entropy associated with proton disorder. Linus Pauling calculated the amount of that entropy. His result was verified by an experimental measurement as described above.
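
As a quick numerical aside on the ice example: Pauling's combinatorial estimate of the residual entropy is S = R ln(3/2) per mole, which lands close to the calorimetric value of roughly 3.4 J/(mol·K) usually quoted for the measurement. A one-line check:

```python
import math

R = 8.314  # molar gas constant, J/(mol*K)

# Pauling's residual-entropy estimate for proton-disordered ice: S = R * ln(3/2)
pauling = R * math.log(1.5)
print(round(pauling, 2))  # ~3.37 J/(mol*K), close to the measured ~3.4
```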

I contend that Dembski's CSI is not a physical quantity in the sense that it cannot be measured. If you wish to dispute that, please describe a measurement procedure.

4:02 pm  
Anonymous Anonymous said...

Oleg

I have already said enough to highlight that information and its measures are a recognised, objective and measurable entity in science and technology over these sixty years now. Immediately, it is not a well-warranted claim to dismiss an information metric as "non-physical."

Since you have decided to dispute me on issues of what entropy is about, I again point out that there is an entire school in physics that draws connexions between information and entropy based on the issue of information on the microstructure of systems, once we move beyond classical thermodynamics to the microstate-based statistical view.

I will also pause to point out that the Clausius-based metric you cite is -- as you acknowledge -- a relative one: dS >= d'Q/T.

Immediately, we should see a key theoretical component in this metric: only in the quasi-static, reversible, equilibrium case is there an equality; but that is taken advantage of as S is a state not a path function. In short, a wealth of theory, assumptions and the mathematics of multivariable functions are implicitly embedded in the apparently simple and more or less directly measurable expression and associated measurements.
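
To make the state-function point concrete: for a body of constant heat capacity C warmed reversibly from T1 to T2, integrating dS = d'Q/T = C dT/T gives ΔS = C ln(T2/T1), independent of path. A minimal sketch (the numbers are illustrative only, not from the exchange above):

```python
import math

def delta_S(C, T1, T2):
    """Entropy change of a body with constant heat capacity C (J/K),
    warmed reversibly from T1 to T2 (kelvin): the integral of C dT / T."""
    return C * math.log(T2 / T1)

# e.g. 1 kg of water (C about 4184 J/K) warmed from 300 K to 310 K:
print(round(delta_S(4184, 300, 310), 1))  # entropy change in J/K
```

Note that the simple ΔQ/T form holds only in the limit of small ΔT; the logarithm is what the full integration yields.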

For further instance, entropy tables are usually in the end related to the elemental states of matter or key compounds such as H2O under standardised conditions -- again, a wealth of implicit theory.

In short, dS and the sort of lab measurements you cite are not a fundamental or direct view. Instead, they embed a large range of theoretical issues and assumptions that often do not come out sufficiently in how lab measures are presented. The statistical approach takes us to a more fundamental view; when that happens, issues over information come out as a significant implicit or explicit consideration, as Jaynes and Brillouin etc have highlighted over the past 50 - 60 years.

The underlying point -- and recall, it was a significant issue in Galileo's day too -- is as I have made: observations embed significant, often unacknowledged theoretical components. "Seeing X is seeing X as P . . . " (Cf Lakatos on belts and cores in research programmes.)

And, that is an issue that leads straight to the considerations of phil of science, thence general philosophical issues under epistemology, logic and metaphysics.

So, on fair comment, the matters are by no means as simple or dismissive as you evidently imagine. And, in particular, Information (in its various guises) is comparably observable and measurable to entropy.

GEM of TKI

6:50 am  
Anonymous Anonymous said...

PS: Dembski is not the author of the CSI concept, which was identified in the molecular biological-chemical context as early as 1973 -- when WmAD was in high school -- by well-known OOL researcher, Leslie Orgel:

>> Living organisms are distinguished by their specified complexity. Crystals fail to qualify as living because they lack complexity; mixtures of random polymers fail to qualify because they lack specificity. [Source: L.E. Orgel, 1973. The Origins of Life. New York: John Wiley, p. 189.] >>

What Dr Dembski has done is to provide a quantitative model that also in part supplies a family of metrics. But, that is a supplement to an objective, observed physical fact.

Namely: the objectively observable, functional reality of informational macromolecules is as physical as the difference between a crystalline solid, a random-polymer chemical tar, and the intricately algorithmic processes and key-lock fitting informational macromolecules of a functional cell; based on information storage capacities that start at 100,000 - 500,000 4-state elements.

7:10 am  
Blogger Mark Frank said...

KF - re Welcome to Wales. I started a new thread on this.

8:23 am  
Blogger oleg said...

Kairosfocus,

You have forgotten to add a qualifier: it is Shannon information that is "a recognised, objective and measurable entity in science and technology" and is related to entropy. You can measure it, but you can't measure CSI.

Not every quantity in a physical theory is measurable, even in principle. One example would be the electrostatic potential: you can raise the potential everywhere by 1 volt and the physical state will be the same. Only potential differences (more precisely, the electric field) are measurable. A particle's wavefunction in quantum mechanics is another example. So yes, we do use abstract mathematical entities in physics.

What distinguishes a physical theory from pure mathematics is the possibility of verifying it experimentally. The example of proton disorder in ice illustrates that. Pauling's theory predicted how much entropy is associated with that disorder. On that basis, thermodynamics tells us how much heat is required to warm the ice by 1 kelvin. That's a theoretical prediction. Experimental tests confirmed it.

In contrast, Dembski's theory cannot be tested experimentally. There are no quantities in it that are amenable to experimental measurement, period. If you think otherwise, tell me what quantity that would be, what Dembski's theory predicts for it, and how you would perform the experimental measurement.

2:53 pm  
Anonymous Anonymous said...

Oleg

"Shannon Information" is -- more or less -- a metric of information-carrying capacity; usually in bits.

We can then apply that by setting an objectively identifiable delimited context of the capacity that we focus on. (We do this all the time when we, say, analyse the carrying capacity of an information channel in the face of Noise. We have to make an inference to design there, to differentiate signal [message, due to art] and noise [mimic, due to chance + necessity in nature] -- indeed, that is fundamental to the definition of signal and the basic metric for information in the signal, as I already highlighted in Eqn 1 as linked and previously cited. Cf discussion in Section A my online note.)

So, back to the little vector metric:

[1] [L] degree of contingency of the relevant aspect: 1 if high. (We routinely recognise, detect and then characterise lawlike mechanical necessity from the presence of low contingency in an aspect of a class of circumstances.)

[2] [S] independently and tightly specified (i.e. how big, relatively speaking, is the cluster of configs that would fit the relevant description? How vulnerable is functionality to perturbation?): 1 if specified.

[3] [C] associated information-carrying capacity, aka complexity, in bits; with 500 - 1,000 bits as the threshold of interest.

Each of these is in relevant cases objectively observable, i.e. is manifest in empirical reality.

Now, take the product L * S * C for our rough and ready CSI (or, more relevantly, FSCI) metric. Since L and S are dimensionless 0/1 flags, this gives us a single-number metric, in bits. WmAD has done the same sort of thing at a more sophisticated level, but the above is enough to show the cogency of the underlying reasoning.
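
A toy rendering of this rough-and-ready filter (my own sketch of the product metric just described, treating L and S as 0/1 flags and using the 500-bit figure from the threshold range above):

```python
def crude_fsci(contingent, specified, capacity_bits, threshold=500):
    """Rough-and-ready CSI/FSCI check per the L * S * C product metric:
    L, S are 0/1 flags; C is information-carrying capacity in bits,
    zeroed if below the threshold. Returns a nonzero figure (in bits)
    only when the aspect is contingent, specified, and at/above threshold."""
    L = 1 if contingent else 0
    S = 1 if specified else 0
    C = capacity_bits if capacity_bits >= threshold else 0
    return L * S * C

# A 60-bit string, even if specified, falls below the threshold:
print(crude_fsci(True, True, 60))      # 0
# A 200,000-bit genome-scale capacity does not:
print(crude_fsci(True, True, 200000))  # 200000
```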

In short your "not physical"/"only subjective" (i.e. merely mental) objection fails and is yet another manifestation of inconsistently selective hyperskepticism.

GEM of TKI

6:38 am  
Anonymous Anonymous said...

Now, for a real issue of clarification:

Above, Oleg suggested that H-bonding etc sharply delimited the proteinome config space. In initially responding, I pointed out the distinction between OOL and in-vivo circumstances, but somehow confused nucleotides and amino acids in my discussion. A jog from memory has led me to make adjustments as below.

1 --> Amino acids used in proteins, as a rule -- we deal with modularity that uses clever irregularities [a hallmark of really good design] -- have the structure H2N-CHR-COOH. (The central C-atom is the alpha carbon, and the R group varies from one amino acid to another; the major exception being proline, whose side-chain R bonds back into the N-atom, giving a certain useful structural rigidity.)

2 --> As a result, generally, observed proteins simply chain by condensation, with the NH2 of one bonding to the COOH of the next, dropping out one H2O per peptide bond. (There is a major exception, based on a side-branch active group, used to protect the protein from chemical attack.) R-groups simply point off to one side.

3 --> Thus, for so-called primary structure, chaining is more or less combinatorial, as for instance was observed by Bradley, Kok et al [distribution of peptide bonds from one aa to another in various proteins approaches a flat distribution], which led Kenyon to publicly recant his famous 1969 biochemical predestination thesis.

4 --> No surprise, the point of the chaining game is flexibility with modularity -- but not slavish locking in to inferior solutions.

5 --> It is post-chaining that several layers of folding and agglomeration structure emerge through H-bonding, cross-linking, structural interference, helical coiling, formation of sheets, etc. This ends up in the key-lock fitting that is key to much of the work of the cell.

6 --> In short, the H-bonding etc are associated with the identification of a narrow target zone of function in a much broader chemically accessible config space for the chain. (And we have also got to deal with not only other aa's but other possible species that could join the chain in an OOL situation, or terminate it. Indeed, even being in H2O can lead to chain disintegration. And heat notoriously denatures -- cooks -- many proteins, starting only a few degrees above 37 C.)

7 --> That is, the proteinome is an instance of the narrow target zone in a broad sea of possible but overwhelmingly non-functional configs. And so it makes sense that the cell uses a template molecule with associated codes and execution machinery to force useful protein formation through a complex process that depends on prior proteins.
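
The "narrow target in a broad sea" claim is at bottom combinatorial arithmetic; a sketch of the raw capacity figures involved (chain lengths here are illustrative, apart from the 100,000-base DNA figure cited above):

```python
import math

def chain_capacity_bits(n_elements, n_states):
    """Raw information-carrying capacity of a chain of n_elements positions,
    each with n_states possible occupants: n_elements * log2(n_states) bits,
    i.e. log2 of the n_states**n_elements possible configurations."""
    return n_elements * math.log2(n_states)

# A modest 150-aa protein over 20 amino acids: ~4.32 bits per residue.
print(round(chain_capacity_bits(150, 20)))     # ~648 bits
# A 100,000-base DNA string over 4 states: 2 bits per base.
print(round(chain_capacity_bits(100_000, 4)))  # 200000 bits
```

Capacity is of course the size of the sea, not the size of the functional target; the argument above turns on the latter being a tiny fraction of the former.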

That of course highlights the role of DNA and RNA, and the issue then becomes whence such codes, algorithms and targeting of such a complex and narrow functional target (all of which are known artifacts of intelligence in many observed cases). Thence the conundrums of OOL research, and those of body-plan level macro-evo.

GEM of TKI

7:05 am  
Blogger oleg said...

Kairosfocus,

It doesn't look like you're going to outline an experimental procedure for measuring CSI (or any other quantity that Dembski deals with). You can measure the amount of Shannon information in bits, but then you need a human to judge whether that information is complex and specified. Both of those factors are entirely subjective: they reflect the knowledge of the observer and not some fundamental property of the object.

Let me illustrate that with an example. Here is a sequence of 60 bits: 1100100100 0011111101 1010101000 1000100001 0110100011 0000100011. Can you tell me whether this information is complex and specified?

3:00 pm  
Anonymous Anonymous said...

Oleg:

First, it is simply false -- as well as yet another red-herring-based, tangential strawman distractor from the balance on the merits above -- that I have not laid out a general, operationalisable filtering process for assessing where CSI -- more properly, FSCI -- is to be found, and how we may infer to its source. An outline is found above, and I have linked or pointed to where much more can be had.

Furthermore, measurement is by definition a process of comparison to a relevant standard, often expressible in ratio, interval, ordinal or nominal scales -- as I have also pointed out. Comparison, in its fundamental state, is plainly and inescapably based on intelligent judgements.

Mechanisation of measurements, per the design and development of suitable automated machinery, is even more riddled with such intelligent judgements and decisions. (I infer from your comments you are likely an engineer: if so, think about your experience of troubleshooting and debugging in a systems development situation. If not, speak with an engineer of your acquaintance who has had to develop a serious system that required significant instrumentation and analysis.)

In short, you are massively begging the question.

As to the proffered bit string, I will say simply this:

1 --> Show me that it functions in a context and we will begin to assess whether it is best explained as designed or a credible product of chance + necessity. (Contrast: if DNA were not functional and sensitive to perturbation, we would not be even considering the issue of whether it shows signs of intelligence.)

2 --> Further, you know if you have even cursorily read my remarks above, that unless we are looking at something that is at least 500 - 1,000 bits and is functional, or the equivalent in compressibility, we are not even in the ballpark where inference to FSCI (the key subset of CSI) is relevant.

3 --> DNA in observed living systems starts at 100,000 - 500,000 4-state elements [200 - 1,000 kbits], is algorithmically functional and code-bearing. Worse, the systems just listed depend on other living systems, whose DNA strings start at about 1 Mbases, for key inputs; i.e. ~1 Mbases is about the level for observationally credible first life.

Onlookers: it should be clear enough where the balance on the actual merits is, and it is increasingly clear to me from the sort of tangential rhetorical tactics we see, that this is a fruitless exchange not a serious dialogue.

G'day


GEM of TKI

7:27 am  
Blogger oleg said...

So kairosfocus, I ask you to actually calculate something and you bail out? Say hi to gpuccio.

Your excuse regarding functionality of the number sequence is not valid. I didn't ask you whether the sequence was functional. I asked you whether it was specified and complex. If you go back to Dembski's paper, he discusses specificity of number sequences beginning on page 9.

So can you tell whether my sequence is specified? Is it complex?

12:28 pm  
Blogger oleg said...

On the other hand, your request for more bits of the sequence is entirely reasonable. Let me know how many you need and I will supply them.

12:50 pm  
Anonymous Anonymous said...

Anyone who has to use a "word" like "operationalisable" is losing the argument

5:50 pm  
Blogger Mark Frank said...

Oleg

I really liked your offer to KF and I hope he accepts the challenge. It inspired me to create a new post for anyone to issue and accept challenges for calculating CSI and included my own even simpler example.

6:05 pm  
Blogger oleg said...

Hi Mark,

Glad to help. I, too, hope that someone will take a stab at these.

2:09 am  
Anonymous Anonymous said...

Oleg

That's enough.

I pointed out that the string is posted without a functional or compressibility context, and that -- per my rough and ready metric -- it is well below the threshold where CSI is even relevant. So, it is below the CSI/FSCI threshold, and falls off the bottom of the range even if functional in whatever context you may choose.

And that is in fact a simple calculation, in a context of making empirical observation and indeed measurement: 60 bits is not > 500 - 1,000. As well, I trust that we will not see hereinafter the idea that measurements and calculations, observations and theories are in praxis such separate categories that they do not intertwine inextricably in real empirical work.

Again, here is the result: NOT CSI, as too lacking in complexity, even if functional and contingent; and I am of course well aware of the Dembski sequence. As was shown step by step above, with explanation.

FYFI, the design inference is not a claim to a universal decoding algorithm. First, identify function and contingency, then address specificity and complexity. Then, we may with profit estimate CSI as to presence and degree, if beyond the threshold. The crude vector and element-product metric I have outlined above is more than enough to discuss this at intelligent-layman level, and not without some relevance to more sophisticated formulations, as I also discussed above.

Further discussion is therefore plainly fruitless.

G'day sir.

GEM of TKI

8:59 am  
Anonymous Anonymous said...

PS: I note too that I contrasted the case of bio-functional DNA, and gave the metric latterly referred to as Fits, for functional bits. 10^6 or so 4-state elements corresponds to 2 * 10^6 or so fits. That is deeply inside the territory of FSCI and presents a now longstanding conundrum to those who would try to explain it by chance + necessity on the gamut of our observed cosmos.

9:07 am  
Blogger oleg said...

Kairosfocus,

I have replied on the new thread.

2:35 pm  
